Translation and Dictionary
Words near each other
・ Cross Damon
・ Cross Days
・ Cross de Atapuerca
・ Cross de l'Acier
・ Cross de San Sebastián
・ Cross Deep House
・ Cross della Vallagarina
・ Cross diatreme
・ Cross dyke
・ Cross Edge
・ Cross education
・ Cross Egypt Challenge
・ Cross elasticity of demand
・ Cross End
・ Cross Enterprise Document Sharing
Cross entropy
・ Cross Examination Debate Association
・ Cross Fell
・ Cross File Transfer
・ Cross Fire (album)
・ Cross Fire (film)
・ Cross Fire (novel)
・ Cross fleury
・ Cross Florida Barge Canal
・ Cross fluid
・ Cross FM
・ Cross for Bravery
・ Cross for Courage and Fidelity
・ Cross for Merit in War
・ Cross for Military Valour



Cross entropy : English-language Wikipedia
Cross entropy

In information theory, the cross entropy between two probability distributions p and q over the same underlying set of events measures the average number of bits needed to identify an event drawn from the set, when the coding scheme used is optimized for an "unnatural" probability distribution q rather than for the "true" distribution p.
The cross entropy for the distributions p and q over a given set is defined as follows:
:H(p, q) = \operatorname{E}_p[-\log q] = H(p) + D_{\mathrm{KL}}(p \| q),
where H(p) is the entropy of p, and D_{\mathrm{KL}}(p \| q) is the Kullback–Leibler divergence of q from p (also known as the ''relative entropy'' of ''p'' with respect to ''q''; note the reversal of emphasis).
For discrete p and q this means
:H(p, q) = -\sum_x p(x)\, \log q(x). \!
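As a quick numerical check of the discrete formula, the following Python sketch computes H(p, q) directly and via the decomposition H(p) + D_KL(p || q) given above; the distributions p and q are illustrative values assumed for this example, not taken from the article.

```python
import numpy as np

def cross_entropy(p, q):
    """H(p, q) = -sum_x p(x) log2 q(x); terms with p(x) = 0 contribute nothing."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log2(q[nz]))

def entropy(p):
    """H(p) = -sum_x p(x) log2 p(x)."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log2(p[nz]))

def kl_divergence(p, q):
    """D_KL(p || q) = sum_x p(x) log2(p(x) / q(x))."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    nz = p > 0
    return np.sum(p[nz] * np.log2(p[nz] / q[nz]))

# Illustrative distributions over a four-element set (assumed for this example).
p = [0.5, 0.25, 0.125, 0.125]   # "true" distribution
q = [0.25, 0.25, 0.25, 0.25]    # "unnatural" coding distribution

print(cross_entropy(p, q))                # 2.0 bits
print(entropy(p) + kl_divergence(p, q))   # same value: H(p) + D_KL(p || q)
```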
The situation for continuous distributions is analogous:
:H(p, q) = -\int_X p(x)\, \log q(x)\, dx. \!
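The continuous form can be checked the same way by approximating the integral on a grid. Below is a minimal sketch in Python, assuming two normal densities as the illustrative p and q (the parameters are arbitrary) and using natural logarithms, so the result is in nats rather than bits.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution with mean mu and standard deviation sigma."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Illustrative choice: p = N(0, 1), q = N(1, 2**2).
x = np.linspace(-20.0, 20.0, 200001)   # grid wide enough to capture both tails
dx = x[1] - x[0]
p = normal_pdf(x, 0.0, 1.0)
q = normal_pdf(x, 1.0, 2.0)

# H(p, q) = -integral of p(x) log q(x) dx, approximated by a Riemann sum (nats).
h_pq = -np.sum(p * np.log(q)) * dx

# Standard closed form for two normal densities, used here only as a check:
# H(p, q) = log(sigma_q * sqrt(2*pi)) + (sigma_p**2 + (mu_p - mu_q)**2) / (2 * sigma_q**2)
closed = np.log(2.0 * np.sqrt(2.0 * np.pi)) + (1.0 + 1.0) / (2.0 * 4.0)

print(h_pq, closed)   # the two values agree closely
```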
NB: The notation H(p,q) is also used for a different concept, the joint entropy of p and q.
== Motivation ==
In information theory, the Kraft–McMillan theorem establishes that any directly decodable coding scheme for coding a message to identify one value x_i out of a set of possibilities X can be seen as representing an implicit probability distribution q(x_i) = 2^{-l_i} over X, where l_i is the length of the code for x_i in bits. Therefore, cross entropy can be interpreted as the expected message length per datum when a wrong distribution q is assumed while the data actually follow a distribution p; that is why the expectation is taken over the probability distribution p and not q.
:H(p, q) = \operatorname{E}_p[l_i] = \operatorname{E}_p\left[\log_2 \frac{1}{q(x_i)}\right]
:H(p, q) = \sum_{x_i} p(x_i)\, \log_2 \frac{1}{q(x_i)} \!
:H(p, q) = -\sum_x p(x)\, \log_2 q(x). \!
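This code-length reading can be made concrete: if symbols are encoded with word lengths l_i = -log_2 q(x_i) (the idealized lengths implied by q) while the symbols are actually drawn from p, the average number of bits spent per symbol is exactly H(p, q), which is at least the H(p) bits a code matched to p would need. A short Python sketch, again with distributions assumed purely for illustration:

```python
import numpy as np

p = np.array([0.5, 0.25, 0.125, 0.125])   # true source distribution (assumed)
q = np.array([0.125, 0.125, 0.25, 0.5])   # distribution the code was designed for (assumed)

# Idealized code lengths implied by q: l_i = -log2 q(x_i), i.e. q(x_i) = 2**(-l_i).
lengths = -np.log2(q)                     # here: 3, 3, 2, 1 bits

# Expected message length per symbol when the data actually follow p.
expected_length = np.sum(p * lengths)
cross_entropy = -np.sum(p * np.log2(q))
print(expected_length, cross_entropy)     # identical: 2.625 bits

# A code matched to the true distribution would need only H(p) bits on average.
print(-np.sum(p * np.log2(p)))            # 1.75 bits
```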

Excerpt source: the free encyclopedia Wikipedia
Read the full "Cross entropy" article at Wikipedia




Copyright(C) kotoba.ne.jp 1997-2016. All Rights Reserved.